

Dario Amodei's Oppenheimer Moment

The Atlantic - Technology

It came earlier than expected. More than a year before his recent standoff with the Pentagon, Dario Amodei, the chief executive of Anthropic, published a 15,000-word manifesto describing a glorious AI future. Its title, "Machines of Loving Grace," is borrowed from a Richard Brautigan poem, but as Amodei acknowledged, with some embarrassment, its utopian vision bears some resemblance to science fiction. According to Amodei, we will soon create the first polymath AIs with abilities that surpass those of Nobel Prize winners in "most relevant fields," and we'll have millions of them, a "country of geniuses," all packed into the glowing server racks of a data center, working together. With access to tools that operate directly on our physical world, these AIs would be able to get up to a great deal of dangerous mischief, but according to Amodei, if they're developed--or "grown," as staffers at Anthropic are fond of saying--in the correct way, they will decide to greatly improve our lives. Amodei does not explain precisely how the AIs will accomplish this.


Top-secret files reveal Americans were used as human guinea pigs in deadly radiation experiments

Daily Mail - Science & tech

Shocking declassified files have revealed how the US government intentionally injected Americans with radioactive substances without their knowledge or consent. This happened to 18 hospital patients between 1945 and 1947, when doctors secretly administered plutonium to study how it moved through and affected the human body as part of early US nuclear experiments during World War II and the Cold War. The chilling details originally came to light in 1995, when the Clinton White House had the Department of Energy disclose the secret experiments aimed at understanding radiation risks to workers building atomic bombs.




Swipe right for AI romance

The Japan Times

A screenshot of the Loverse app shows an AI-generated woman, characterized as Miyu, a 25-year-old hair and makeup artist, registered as a female companion. When artificial intelligence first started receiving attention around the end of 2022, Goki Kusunoki was tinkering around to see what kind of services he could create with the technology. One thing clicked for him after he created an image of an attractive woman with AI -- an AI companion -- and wondered what it would be like to engage in a conversation with her. "As I kept talking with her, I found that the conversations were more enjoyable than I had expected and as the exchanges continued, my feelings gradually grew -- at some point I caught myself thinking, 'I might actually like her,'" he recounted.


NASA telescope will hunt down 'city killer' asteroids

Science

On a commercial thoroughfare in old town Pasadena, California, a stone's throw from NASA's Jet Propulsion Laboratory (JPL), you'll find the Neon Retro Arcade. Among its collection of vintage video games is the 1979 Atari classic Asteroids, in which a pixelated spaceship shoots down a barrage of space rocks to stave off fatal collisions. After long days of work at JPL, Amy Mainzer used to rack up high scores on that console. "It was a hoot," she says. It was also apt, considering she oversees a space mission designed to spot dangerous asteroids before they crash into Earth. That mission, the Near-Earth Object (NEO) Surveyor, was conceived in the early 2000s and finally got the green light in 2022. Its components are now being built, tested, and assembled in clean rooms across the United States ahead of its planned launch in September 2027. "We're in the thick of building everything," says Mainzer, NEO Surveyor's principal investigator and now an astronomer at the University of California, Los Angeles (UCLA).


The High Cost of Incivility: Quantifying Interaction Inefficiency via Multi-Agent Monte Carlo Simulations

Mangold, Benedikt

arXiv.org Artificial Intelligence

Workplace toxicity is widely recognized as detrimental to organizational culture, yet quantifying its direct impact on operational efficiency remains methodologically challenging due to the ethical and practical difficulties of reproducing conflict in human subjects. This study leverages Large Language Model (LLM) based Multi-Agent Systems to simulate 1-on-1 adversarial debates, creating a controlled "sociological sandbox". We employ a Monte Carlo method to simulate hundreds of discussions, measuring the convergence time (defined as the number of arguments required to reach a conclusion) between a baseline control group and treatment groups involving agents with "toxic" system prompts. Our results demonstrate a statistically significant increase of approximately 25% in the duration of conversations involving toxic participants. We propose that this "latency of toxicity" serves as a proxy for financial damage in corporate and academic settings. Furthermore, we demonstrate that agent-based modeling provides a reproducible, ethical alternative to human-subject research for measuring the mechanics of social friction.
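The Monte Carlo comparison the abstract describes can be illustrated with a toy stand-in: replace the LLM agents with a simple random process in which each round of debate reaches a conclusion with some probability, and toxicity lowers that probability. The agreement probabilities below are hypothetical values chosen only to mirror the reported ~25% slowdown; the paper's actual agents, prompts, and convergence criterion are not reproduced here.

```python
import random
from statistics import mean

def debate_length(p_agree, rng, max_rounds=200):
    """Number of argument exchanges until the pair converges.

    Each round, the debate concludes with probability p_agree;
    a 'toxic' participant makes agreement less likely per round.
    """
    for rounds in range(1, max_rounds + 1):
        if rng.random() < p_agree:
            return rounds
    return max_rounds

def monte_carlo(p_agree, n_trials=2000, seed=0):
    rng = random.Random(seed)
    return mean(debate_length(p_agree, rng) for _ in range(n_trials))

baseline = monte_carlo(p_agree=0.25)  # control group
toxic = monte_carlo(p_agree=0.20)     # hypothetical toxicity effect
print(f"baseline: {baseline:.2f} rounds, toxic: {toxic:.2f} rounds")
print(f"relative slowdown: {toxic / baseline - 1:.0%}")
```

Because per-round agreement is geometric, the expected lengths are 1/0.25 = 4 and 1/0.20 = 5 rounds, so the simulated slowdown concentrates around 25% as the number of trials grows.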


DEFEND: Poisoned Model Detection and Malicious Client Exclusion Mechanism for Secure Federated Learning-based Road Condition Classification

Liu, Sheng, Papadimitratos, Panos

arXiv.org Artificial Intelligence

Federated Learning (FL) has drawn the attention of the Intelligent Transportation Systems (ITS) community. FL can train various models for ITS tasks, notably camera-based Road Condition Classification (RCC), in a privacy-preserving collaborative way. However, opening up to collaboration also opens FL-based RCC systems to adversaries, i.e., misbehaving participants that can launch Targeted Label-Flipping Attacks (TLFAs) and threaten transportation safety. Adversaries mounting TLFAs poison training data to misguide model predictions, from an actual source class (e.g., wet road) to a wrongly perceived target class (e.g., dry road). Existing countermeasures against poisoning attacks cannot maintain model performance under TLFAs close to the performance level in attack-free scenarios, because they lack model misbehavior detection specific to TLFAs and neglect client exclusion after detection. To close this research gap, we propose DEFEND, which includes a poisoned model detection strategy that leverages neuron-wise magnitude analysis for attack goal identification and Gaussian Mixture Model (GMM)-based clustering. DEFEND discards poisoned model contributions in each round and adapts client ratings accordingly, eventually excluding malicious clients. Extensive evaluation involving various FL-RCC models and tasks shows that DEFEND can thwart TLFAs and outperform seven baseline countermeasures by at least 15.78%, with DEFEND remarkably matching attack-free performance even while under attack.
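The GMM-based detection step can be sketched in miniature. This is not the paper's DEFEND pipeline: the per-client feature below (a synthetic two-dimensional magnitude statistic standing in for the neuron-wise analysis of the attacked classes) and the cluster sizes are illustrative assumptions; only the idea of fitting a two-component mixture and flagging the anomalous cluster is taken from the abstract.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Hypothetical per-client feature: magnitude statistics of the
# output-layer neurons tied to the attacked source/target classes.
# Label-flipping is assumed to inflate these for poisoned clients.
benign = rng.normal(loc=0.5, scale=0.05, size=(18, 2))
poisoned = rng.normal(loc=1.2, scale=0.05, size=(2, 2))
features = np.vstack([benign, poisoned])  # clients 0..17 benign, 18-19 poisoned

gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
labels = gmm.predict(features)

# Treat the smaller cluster as suspicious and exclude its clients.
minority = int(np.argmin(np.bincount(labels, minlength=2)))
flagged = np.where(labels == minority)[0]
print("flagged clients:", flagged)
```

In a full FL round, the flagged contributions would be discarded from aggregation and the corresponding client ratings downgraded, as the abstract describes.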


DIVIDE: A Framework for Learning from Independent Multi-Mechanism Data Using Deep Encoders and Gaussian Processes

Chawla, Vivek, Slautin, Boris, Pratiush, Utkarsh, Penumadu, Dayakar, Kalinin, Sergei

arXiv.org Artificial Intelligence

ABSTRACT

Scientific datasets often arise from multiple independent mechanisms such as spatial, categorical or structural effects, whose combined influence obscures their individual contributions. We introduce DIVIDE, a framework that disentangles these influences by integrating mechanism-specific deep encoders with a structured Gaussian Process in a joint latent space. Disentanglement here refers to separating independently acting generative factors. The encoders isolate distinct mechanisms while the Gaussian Process captures their combined effect with calibrated uncertainty. The architecture supports structured priors, enabling interpretable and mechanism-aware prediction as well as efficient active learning. Across benchmarks, DIVIDE separates mechanisms, reproduces additive and scaled interactions, and remains robust under noise. The framework extends naturally to multifunctional datasets where mechanical, electromagnetic or optical responses coexist.

INTRODUCTION

Many real-world systems exhibit behavior driven by the combined influence of multiple independent mechanisms. These mechanisms may represent categorical factors, spatial dependencies, or nonlinear physical responses. While the scalar output of such systems is observable, the individual contributions of these mechanisms are often unknown and unmeasured. Modeling this type of data requires not only accurate predictions but also the ability to attribute variation in the output to specific, distinct sources. In this context, we use disentanglement to mean recovering those independently acting generative factors from observational data. Disentangling these contributions is particularly important in scientific and engineering domains where interpretability, causality, and mechanism-aware reasoning are essential.
Partial solutions to this challenge have emerged from the field of disentangled representation learning, which seeks to identify independent factors of variation from high-dimensional data.
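The overall shape of the approach can be sketched schematically: mechanism-specific encoders map each input mechanism into its own latent block, the blocks are combined in a joint latent space, and a Gaussian Process regresses the observed output with calibrated uncertainty. Everything below is an assumption for illustration: fixed feature maps stand in for the deep encoders, and scikit-learn's GaussianProcessRegressor stands in for the paper's structured GP.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Two independent mechanisms: a spatial coordinate and a categorical factor.
x_spatial = rng.uniform(0, 1, size=(200, 1))
x_cat = rng.integers(0, 3, size=(200, 1))

# Mechanism-specific "encoders" (fixed stand-ins for deep encoders),
# each producing its own latent block.
def encode_spatial(x):
    return np.hstack([np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)])

def encode_cat(c):
    return np.eye(3)[c.ravel()]  # one-hot latent for the categorical factor

z = np.hstack([encode_spatial(x_spatial), encode_cat(x_cat)])  # joint latent space

# Additive ground truth: a spatial effect plus a per-category offset.
offsets = np.array([0.0, 1.0, -1.0])
y = np.sin(2 * np.pi * x_spatial).ravel() + offsets[x_cat.ravel()]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4).fit(z, y)
pred, std = gp.predict(z[:5], return_std=True)  # mean with calibrated uncertainty
```

Because the two latent blocks enter the joint space separately, the fitted model recovers the additive structure of the two mechanisms; the GP's predictive standard deviation is what would drive the active-learning loop the abstract mentions.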